# Large-scale Retrieval Optimization

## GTE-Qwen2-1.5B-Instruct-4bit-DWQ
License: Apache-2.0 · Tags: Text Embedding, Transformers
Publisher: mlx-community · Downloads: 22 · Likes: 1

A 1.5B-parameter general text-embedding model based on the Qwen2 architecture. It supports both Chinese and English and focuses on sentence-similarity computation and text-retrieval tasks.
## B1ade-Embed
Tags: Text Embedding, Transformers
Publisher: w601sxs · Downloads: 660 · Likes: 4

A merged model composed of several foundation models, including bert-large-uncased, WhereIsAI/UAE-Large-V1, BAAI/bge-large-en-v1.5, mixedbread-ai/mxbai-embed-large-v1, and avsolatorio/GIST-large-Embedding-v0. It is primarily used for text classification, retrieval, and clustering tasks.
## GTE-Qwen1.5-7B-Instruct
License: Apache-2.0 · Tags: Text Embedding, Transformers
Publisher: Alibaba-NLP · Downloads: 253 · Likes: 103

A 7B-parameter sentence-embedding model based on the Qwen1.5 architecture, focusing on sentence-similarity calculation and multi-task evaluation.
## GTE-Large
License: MIT · Tags: Text Embedding, English
Publisher: thenlper · Downloads: 1.5M · Likes: 278

GTE-Large is a powerful sentence-transformer model focused on sentence similarity and text-embedding tasks, and it performs strongly across multiple benchmarks.
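All of the models above are used the same way in retrieval: encode texts into dense vectors, then rank candidates by cosine similarity to a query vector. The sketch below shows only that scoring step, using toy hand-made vectors in place of real model output (with a library such as `sentence-transformers`, embeddings would instead come from something like `SentenceTransformer("thenlper/gte-large").encode(...)`; the vector values and document names here are illustrative assumptions):

```python
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity between two embedding vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# Toy 4-dimensional "embeddings"; a real model such as thenlper/gte-large
# produces much higher-dimensional vectors.
query = np.array([0.9, 0.1, 0.0, 0.2])
docs = {
    "doc_a": np.array([0.8, 0.2, 0.1, 0.1]),
    "doc_b": np.array([0.0, 0.1, 0.9, 0.3]),
}

# Retrieval = rank documents by similarity to the query vector.
ranked = sorted(docs, key=lambda k: cosine_similarity(query, docs[k]), reverse=True)
print(ranked[0])  # the document whose vector points closest to the query
```

At large scale, this exhaustive ranking is usually replaced by an approximate nearest-neighbor index, but the similarity function being approximated is the same.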
© 2025 AIbase